Local/global analysis of the stationary solutions of some neural field equations
Neural or cortical fields are continuous assemblies of mesoscopic models of neural populations, also called neural masses, which are fundamental in the modeling of macroscopic parts of the brain. Neural fields are described by nonlinear integro-differential equations. The solutions of these equations represent the state of activity of these populations when subjected to inputs from neighbouring brain areas. Understanding the properties of these solutions is essential for advancing our understanding of the brain. In this paper we study the dependence of the stationary solutions of the neural field equations on the stiffness of the nonlinearity and the contrast of the external inputs. This is done by using degree theory and bifurcation theory in the context of functional, in particular infinite-dimensional, spaces. The joint use of these two theories allows us to make new, detailed predictions about the global and local behaviours of the solutions. We also provide a generic finite-dimensional approximation of these equations which allows us to study two models in great detail. The first model is a neural mass model of a cortical hypercolumn of orientation-sensitive neurons, the ring model. The second model is a general neural field model where the spatial connectivity is described by heterogeneous Gaussian-like functions.
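For concreteness, the equations in question can be sketched as follows; this is a standard Amari-type neural field model of the kind the abstract describes, and the symbols (V, W, S, sigma, I, Omega) are our notational assumptions rather than the paper's:

$$\frac{\partial V(x,t)}{\partial t} = -V(x,t) + \int_{\Omega} W(x,y)\,S\big(\sigma V(y,t)\big)\,dy + I(x),$$

where V(x,t) is the activity of the population at cortical position x, W the spatial connectivity kernel, S a sigmoid whose stiffness is governed by the slope parameter sigma, and I the external input whose amplitude sets the contrast. Stationary solutions are the fixed points

$$V(x) = \int_{\Omega} W(x,y)\,S\big(\sigma V(y)\big)\,dy + I(x),$$

and it is the dependence of this solution set on sigma and on the contrast of I that degree theory (globally) and bifurcation theory (locally) are used to track.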
Stochastic neural field equations: A rigorous footing
We extend the theory of neural fields which has been developed in a
deterministic framework by considering the influence of spatio-temporal noise. The
outstanding problem that we here address is the development of a theory that
gives rigorous meaning to stochastic neural field equations, and conditions
ensuring that they are well-posed. Previous investigations in the field of
computational and mathematical neuroscience have been numerical for the most
part. Such questions have been considered for a long time in the theory of
stochastic partial differential equations, where at least two different
approaches have been developed, each having its advantages and disadvantages.
It turns out that both approaches have also been used in computational and
mathematical neuroscience, but with much less emphasis on the underlying
theory. We present a review of two existing theories and show how they can be
used to put the theory of stochastic neural fields on a rigorous footing. We
also provide general conditions on the parameters of the stochastic neural
field equations under which we guarantee that these equations are well-posed.
In so doing we relate each approach to previous work in computational and
mathematical neuroscience. We hope this will provide a reference that will pave
the way for future studies (both theoretical and applied) of these equations,
where basic questions of existence and uniqueness will no longer be a cause for
concern.
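To fix ideas, a formal stochastic neural field equation of the kind meant here can be written, in notation of our choosing, as

$$dV(x,t) = \Big(-V(x,t) + \int_{\Omega} J(x,y)\,S\big(V(y,t)\big)\,dy\Big)\,dt + \varepsilon\,dW(x,t),$$

where J is the connectivity kernel, S a sigmoid, and dW(x,t) a spatio-temporal noise term. The mathematical content lies in the last term: the two standard SPDE frameworks, typically Walsh's martingale-measure approach and the Hilbert-space-valued (Da Prato-Zabczyk) approach, differ in how they give rigorous meaning to W and to solutions driven by it, and well-posedness conditions constrain the spatial regularity and correlation of the noise together with the kernel and the nonlinearity.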
Illusions in the Ring Model of visual orientation selectivity
The Ring Model of orientation tuning is a dynamical model of a hypercolumn of
visual area V1 in the human neocortex that has been designed to account for the
experimentally observed orientation tuning curves by local, i.e.,
cortico-cortical computations. The tuning curves are stationary, i.e., time-independent, solutions of this dynamical model. One important assumption
underlying the Ring Model is that the LGN input to V1 is weakly tuned to the
retinal orientation and that it is the local computations in V1 that sharpen
this tuning. Because the equations that describe the Ring Model have built-in
equivariance properties in the synaptic weight distribution with respect to a
particular group acting on the retinal orientation of the stimulus, the model
in effect encodes an infinite number of tuning curves that are arbitrarily
translated with respect to each other. By using the Orbit Space Reduction
technique we rewrite the model equations in canonical form as functions of
polynomials that are invariant with respect to the action of this group. This
allows us to combine equivariant bifurcation theory with an efficient numerical
continuation method in order to compute the tuning curves predicted by the Ring
Model. Surprisingly some of these tuning curves are not tuned to the stimulus.
We interpret them as neural illusions and show numerically how they can be
induced by simple dynamical stimuli. These neural illusions are important
biological predictions of the model. If they could be observed experimentally
this would be a strong point in favour of the Ring Model. We also show how our theoretical analysis allows us to specify very simply the ranges of the model parameters by comparing the model predictions with published experimental observations.
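A hedged sketch of the model equation may help; the form below is the classical ring model of orientation tuning, with symbols of our choosing:

$$\tau\,\frac{\partial V(\theta,t)}{\partial t} = -V(\theta,t) + \int_{-\pi/2}^{\pi/2} J(\theta-\theta')\,S\big(V(\theta',t)\big)\,\frac{d\theta'}{\pi} + I_{\mathrm{ext}}(\theta-\theta_0),$$

where theta is a neuron's preferred orientation, J an even, pi-periodic connectivity (classically J(theta) = J_0 + J_1 cos 2theta), and I_ext the weakly tuned LGN input centred on the stimulus orientation theta_0. Because J depends only on the difference theta - theta', the equation is equivariant under translations of orientation: if V(theta) is a tuning curve for stimulus orientation theta_0, then V(theta - phi) is a solution for theta_0 + phi. This is the group action referred to above, and the reason the model encodes a whole orbit of mutually translated tuning curves.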
Asymptotic description of stochastic neural networks. II - Characterization of the limit law
We continue the development, started in Part I, of the asymptotic description of
certain stochastic neural networks. We use the Large Deviation Principle (LDP)
and the good rate function H announced there to prove that H has a unique
minimum mu_e, a stationary measure on the set of trajectories. We characterize
this measure by its two marginals, at time 0, and from time 1 to T. The second
marginal is a stationary Gaussian measure. With an eye on applications, we show
that its mean and covariance operator can be inductively computed. Finally we
use the LDP to establish various convergence results, averaged and quenched.
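The flavour of these convergence results is a generic consequence of an LDP with a good rate function; in notation we are assuming, if hat mu_N denotes the empirical measure of the N-neuron network, then for every closed set F of measures with mu_e not in F one has inf_F H > 0, hence

$$P\big(\hat\mu_N \in F\big) \le \exp\Big(-N \inf_F H + o(N)\Big) \longrightarrow 0,$$

so hat mu_N converges to the unique minimiser mu_e exponentially fast, both on average over the synaptic weights and, in the quenched versions, for almost every realisation of them.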
Asymptotic description of stochastic neural networks. I - Existence of a Large Deviation Principle
We study the asymptotic law of a network of interacting neurons when the
number of neurons becomes infinite. The dynamics of the neurons is described by
a set of stochastic differential equations in discrete time. The neurons
interact through the synaptic weights which are Gaussian correlated random
variables. Unlike previous works, which made the biologically unrealistic assumption that the weights were i.i.d. random variables, we assume that they are correlated. We introduce the process-level empirical measure of
the trajectories of the solutions to the equations of the finite network of
neurons and the averaged law (with respect to the synaptic weights) of the
trajectories of the solutions to the equations of the network of neurons. The
result is that the image law through the empirical measure satisfies a large
deviation principle with a good rate function. We provide an analytical
expression of this rate function in terms of the spectral representation of
certain Gaussian processes.
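For readers unfamiliar with the objects involved, a brief sketch in notation of our own choosing: writing x^1, ..., x^N for the trajectories of the N neurons, the process-level empirical measure is the random probability measure

$$\hat\mu_N = \frac{1}{N}\sum_{i=1}^{N}\delta_{x^i},$$

and a large deviation principle with good rate function H states, roughly, that P(hat mu_N is near mu) behaves like e^{-N H(mu)}; precisely,

$$\limsup_{N\to\infty}\frac{1}{N}\log P\big(\hat\mu_N \in F\big) \le -\inf_F H \qquad\text{and}\qquad \liminf_{N\to\infty}\frac{1}{N}\log P\big(\hat\mu_N \in G\big) \ge -\inf_G H$$

for all closed sets F and open sets G of measures.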
Large Deviations of a Spatially-Stationary Network of Interacting Neurons
In this work we determine a process-level Large Deviation Principle (LDP) for
a model of interacting neurons indexed by a lattice. The neurons are subject to noise, which is modelled as a correlated martingale. The probability law governing the noise is strictly stationary, and we are therefore able to find an LDP for the probability laws governing the stationary empirical measure generated by the neurons in a cube, as the side length of the cube grows. We use this LDP to determine an LDP for the neural network
model. The connection weights between the neurons evolve according to a learning rule (neuronal plasticity), and these results are adaptable to a large
variety of neural network models. This LDP is of great use in the mathematical
modelling of neural networks, because it allows a quantification of the
likelihood of the system deviating from its limit, and also a determination of the direction in which the system is likely to deviate. The work is also of interest
because there are nontrivial correlations between the neurons even in the
asymptotic limit, thereby presenting itself as a generalisation of traditional
mean-field models.
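A hedged sketch of the central object, in notation we are assuming rather than taking from the paper: if omega = (omega^k) denotes the collection of neuron trajectories indexed by the lattice sites k, V_n the cube of side length n centred at the origin, and theta_k the shift of the whole configuration by k, then the stationary (process-level) empirical measure is of the form

$$\hat\mu_n(\omega) = \frac{1}{|V_n|}\sum_{k \in V_n} \delta_{\theta_k \tilde\omega_n},$$

where tilde omega_n is a suitably periodised version of the configuration restricted to V_n. Its law is invariant under lattice shifts, which is what makes a level-3 (process-level) LDP as n grows the natural statement, and why the limit can retain nontrivial correlations between neurons.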
Prediction via the Quantile-Copula Conditional Density Estimator
To make a prediction of a response variable from an explanatory one which takes into account features such as multimodality, a nonparametric approach based on an estimate of the conditional density is advocated and considered. In particular, we build point and interval predictors based on the quantile-copula estimator of the conditional density by Faugeras [8]. The consistency of these predictors is proved through a uniform consistency result for the conditional density estimator. Finally, the practical implementation of these predictors is discussed. An application to a real data set illustrates the proposed methods.
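The estimator is simple enough to sketch in code. Below is a minimal Python illustration of a quantile-copula-style conditional density estimator and the induced point predictor; the function names are ours, and the use of scipy's gaussian_kde for the copula density (a careful implementation would use boundary-corrected kernels on the unit square) is a simplifying assumption, not the paper's implementation.

    import numpy as np
    from scipy.stats import gaussian_kde

    def ecdf(sample):
        # Empirical CDF, rescaled by n/(n+1) so values stay strictly inside (0, 1).
        s = np.sort(sample)
        n = len(s)
        return lambda t: np.searchsorted(s, t, side="right") / (n + 1.0)

    def quantile_copula_cde(x_sample, y_sample):
        # f(y|x) ~= f_Y(y) * c(F_X(x), F_Y(y)): the marginal density of Y times
        # the copula density evaluated at the quantile transforms of (x, y).
        Fx, Fy = ecdf(x_sample), ecdf(y_sample)
        pseudo = np.vstack([Fx(x_sample), Fy(y_sample)])  # pseudo-observations in (0,1)^2
        c_hat = gaussian_kde(pseudo)   # copula density estimate (no boundary correction)
        f_y = gaussian_kde(y_sample)   # marginal density of Y
        def f_cond(y, x):
            y = np.atleast_1d(y)
            u = np.full(y.shape, Fx(np.atleast_1d(x))[0])
            return f_y(y) * c_hat(np.vstack([u, Fy(y)]))
        return f_cond

    # Point prediction at x = 0.5: the conditional mode over a grid of y values.
    rng = np.random.default_rng(0)
    x_s = rng.normal(size=500)
    y_s = np.sin(2.0 * x_s) + 0.3 * rng.normal(size=500)
    f_cond = quantile_copula_cde(x_s, y_s)
    grid = np.linspace(y_s.min(), y_s.max(), 400)
    y_hat = grid[np.argmax(f_cond(grid, 0.5))]

An interval predictor in the same spirit can be read off the same estimate, e.g. a highest-density region of the estimated conditional density whose enclosed mass reaches a prescribed level.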